Scaling External Knowledge Input Beyond Context Windows of LLMs via Multi-Agent Collaboration

A multi-agent approach for scalable knowledge integration surpassing LLM context window limits

Published

May 27, 2025

Authors: Z. Liu et al.

Link: http://arxiv.org/abs/2505.21471v1

Institutions: Tsinghua University, Department of Computer Science & Technology, Institute for AI • Institute for AI Industry Research (AIR), Tsinghua University • Tongyi Lab, Alibaba Group

Keywords: Large Language Models, multi-agent systems, context window extension, knowledge integration, retrieval-augmented generation, distributed reasoning, question answering, survey generation, scalability, global synchronization

Large Language Models (LLMs) have made notable progress in expanding context windows, yet real-world tasks often require integrating far more external knowledge than even the largest windows can hold. Existing workarounds, such as retrieval-augmented generation and current multi-agent systems, either lose information or fail to scale efficiently as the input grows.

To address these challenges, the authors propose a multi-agent collaboration framework: external knowledge is distributed across multiple agents, each reasoning over a slice that fits within its own context window, and their partial results are reconciled through global synchronization. This lets the total knowledge input scale well beyond any single model's context limit.
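The general distribute-then-synchronize pattern can be sketched as follows. This is not the paper's actual implementation; the chunking scheme, the `agent` function (a trivial keyword filter standing in for an independent LLM call), and the `coordinator` merge step are all illustrative assumptions, kept minimal so the control flow is runnable.

```python
def chunk_sentences(text: str, per_agent: int) -> list[str]:
    """Split the external knowledge into agent-sized groups of sentences."""
    sents = [s.strip() + "." for s in text.split(".") if s.strip()]
    return [" ".join(sents[i:i + per_agent])
            for i in range(0, len(sents), per_agent)]

def agent(chunk: str, query: str) -> str:
    """Stand-in for one LLM agent: keep only sentences relevant to the query.
    A real agent would run the model over its chunk plus the query."""
    return " ".join(s.strip() + "." for s in chunk.split(".")
                    if query.lower() in s.lower())

def coordinator(partials: list[str]) -> str:
    """Synchronization step: merge non-empty agent outputs into one context
    small enough for a final answering model."""
    return " | ".join(p for p in partials if p)

# Toy knowledge source, far smaller than a real corpus.
knowledge = ("Alpha powers the engine. Beta handles storage. "
             "Alpha also drives caching. Gamma does logging.")

chunks = chunk_sentences(knowledge, per_agent=2)      # 2 agents, 2 sentences each
merged = coordinator([agent(c, "Alpha") for c in chunks])
print(merged)
```

Each agent sees only its own chunk, so no single context window ever holds the full corpus; the coordinator's merged output is what grows with relevance, not with total input size.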

Experiments on knowledge-intensive tasks, including question answering and survey generation, substantiate the framework's effectiveness as the volume of external knowledge grows.

Building on these findings, the authors conclude that multi-agent collaboration offers a scalable path to integrating external knowledge well beyond the context window limits of any single LLM.